Point cloud sequences of 3D human actions exhibit unordered intra-frame spatial information and ordered inter-frame temporal information. To capture the spatio-temporal structure of a point cloud sequence, cross-frame spatio-temporal local neighborhoods around centroids are usually constructed. However, the computationally expensive construction of these spatio-temporal local neighborhoods severely limits the parallelism of models. Moreover, it is unreasonable to treat spatial and temporal information equally in spatio-temporal local learning, because human actions are complicated along the spatial dimension and simple along the temporal dimension. In this paper, to avoid spatio-temporal local encoding, we propose a strongly parallelized point cloud sequence network, dubbed SequentialPointNet, for 3D action recognition. SequentialPointNet consists of two serial modules: an intra-frame appearance encoding module and an inter-frame motion encoding module. To model the strong spatial structure of human actions, each point cloud frame is processed in parallel in the intra-frame appearance encoding module, and the feature vector of each frame is output to form a feature vector sequence that characterizes static appearance changes along the temporal dimension. To model the weak temporal changes of human actions, temporal position encoding and a hierarchical pyramid pooling strategy are applied to the feature vector sequence in the inter-frame motion encoding module. In addition, to better exploit spatio-temporal content, multiple levels of features of human motion are aggregated before end-to-end 3D action recognition is performed. Extensive experiments on three public datasets show that SequentialPointNet outperforms state-of-the-art methods.
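A minimal sketch of the two-module idea follows, assuming a PointNet-style shared MLP as the per-frame encoder, sinusoidal temporal position encoding, and max-pooling pyramid levels of (1, 2, 4); none of these choices are taken from the released implementation.

```python
# Minimal sketch of the two-stage idea (not the official SequentialPointNet code).
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Intra-frame appearance encoding: one point cloud frame -> one feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, points):            # points: (B, N, 3)
        feats = self.mlp(points)          # (B, N, dim), each point processed independently
        return feats.max(dim=1).values    # symmetric max-pool over the unordered points


class MotionEncoder(nn.Module):
    """Inter-frame motion encoding: temporal position encoding + pyramid pooling."""
    def __init__(self, dim=128, levels=(1, 2, 4), num_classes=20):
        super().__init__()
        self.levels = levels
        self.fc = nn.Linear(dim * sum(levels), num_classes)

    def forward(self, seq):               # seq: (B, T, dim) frame feature sequence
        B, T, D = seq.shape
        pos = torch.arange(T, dtype=torch.float32, device=seq.device)
        freq = torch.arange(D // 2, dtype=torch.float32, device=seq.device)
        angle = pos[:, None] / (10000 ** (2 * freq[None, :] / D))
        pe = torch.cat([angle.sin(), angle.cos()], dim=-1)      # (T, D) temporal encoding
        seq = seq + pe                                          # inject temporal order
        pooled = []
        for level in self.levels:                               # hierarchical pyramid pooling
            for chunk in seq.chunk(level, dim=1):
                pooled.append(chunk.max(dim=1).values)          # (B, D) per temporal segment
        return self.fc(torch.cat(pooled, dim=-1))               # aggregate multi-level features


# Frames are encoded independently (hence parallelizable); only the light motion module sees order.
frames = torch.randn(2, 8, 512, 3)                              # (batch, T, points, xyz)
enc = FrameEncoder()
frame_feats = torch.stack([enc(frames[:, t]) for t in range(frames.shape[1])], dim=1)
logits = MotionEncoder()(frame_feats)
```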
Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of fully trained dense networks at initialization, without any optimization of the network's weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) remains mysterious. In this paper, we carry out a first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we can find untrained sparse subnetworks at initialization that can match the performance of fully trained dense GNNs. Beyond this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, and hence become a powerful tool for enabling deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
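The abstract does not spell out how the matching untrained subnetworks are found; the sketch below shows one common score-based recipe (in the spirit of edge-popup) applied to a single GCN layer. The frozen random weights, learnable scores, straight-through top-k mask, and the 80% sparsity are all assumptions, not the paper's stated procedure.

```python
# Sketch: finding an untrained subnetwork in a GNN layer by training only per-weight scores.
import torch
import torch.nn as nn


class GetMask(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, sparsity):
        k = int((1 - sparsity) * scores.numel())          # number of weights to keep
        mask = torch.zeros_like(scores)
        idx = scores.flatten().topk(k).indices
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad):                              # straight-through estimator
        return grad, None


class MaskedGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, sparsity=0.8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * in_dim ** -0.5,
                                   requires_grad=False)          # frozen random init
        self.scores = nn.Parameter(torch.rand(in_dim, out_dim))  # only the scores are trained
        self.sparsity = sparsity

    def forward(self, x, adj_norm):                       # adj_norm: normalized adjacency matrix
        mask = GetMask.apply(self.scores, self.sparsity)
        return adj_norm @ x @ (self.weight * mask)        # GCN propagation with masked weights
```

Training then updates only `scores` with the task loss, leaving `weight` at its random initialization, so the selected subnetwork is never trained.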
Temporal action localization aims to predict the boundary and category of each action instance in long untrimmed videos. Most previous methods, based on anchors or proposals, neglect the global-local context interaction over the entire video sequence, and their multi-stage designs cannot generate action boundaries and categories directly. To address these problems, this paper proposes a novel end-to-end model called Adaptive Perception transformer (AdaPerformer for short). Specifically, AdaPerformer explores a dual-branch multi-head self-attention mechanism. One branch handles global perception attention, which models the entire video sequence and aggregates globally relevant context, while the other branch concentrates on local convolutional shift to aggregate intra-frame and inter-frame information through our bidirectional shift operation. The end-to-end design produces the boundaries and categories of video actions without extra steps. Extensive experiments together with ablation studies are provided to reveal the effectiveness of our design. Our method achieves state-of-the-art accuracy on the THUMOS14 dataset in terms of mAP@0.5, with 42.6% mAP@0.7 and 62.7% mAP@Avg, and obtains competitive performance on the ActivityNet-1.3 dataset with an average mAP of 36.1%. Code and models are available at https://github.com/soupero/adaperformer.
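As a rough illustration (not the code released at the repository above), the block below pairs a global multi-head self-attention branch with a local branch built from a bidirectional temporal channel shift followed by a 1D convolution; the channel-split ratio and the residual fusion are guesses about the design.

```python
# Hedged sketch of a dual-branch block: global self-attention plus a local
# bidirectional temporal shift + 1D convolution over frame features.
import torch
import torch.nn as nn


class DualBranchBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # global branch
        self.local_conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # local branch
        self.norm = nn.LayerNorm(dim)

    @staticmethod
    def bidirectional_shift(x, ratio=8):
        """Shift one slice of channels forward and another backward in time,
        mixing inter-frame information at zero parameter cost."""
        B, T, C = x.shape
        fold = C // ratio
        out = x.clone()
        out[:, 1:, :fold] = x[:, :-1, :fold]                    # shift forward in time
        out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]    # shift backward in time
        return out

    def forward(self, x):                              # x: (B, T, dim) frame features
        g, _ = self.attn(x, x, x)                      # aggregate global context
        l = self.local_conv(self.bidirectional_shift(x).transpose(1, 2)).transpose(1, 2)
        return self.norm(x + g + l)                    # fuse both branches residually
```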
Lottery tickets (LTs) are capable of discovering accurate and sparse subnetworks that can be trained in isolation to match the performance of dense networks. Ensembling, in parallel, is one of the oldest tricks in machine learning for improving performance by combining the outputs of multiple independent models. In the context of LTs, however, the benefit of ensembling is diluted, since an ensemble does not directly lead to stronger sparse subnetworks but merely leverages their predictions for a better decision. In this work, we first observe that directly averaging the weights of adjacent learned subnetworks significantly boosts the performance of LTs. Encouraged by this observation, we further propose an alternative way to perform an "ensemble" over the subnetworks identified by iterative magnitude pruning, via a simple interpolation strategy. We call our method Lottery Pools. In contrast to the naive ensemble, which brings no performance gains to any individual subnetwork, Lottery Pools yields much stronger sparse subnetworks than the original LTs without requiring any extra training or inference cost. Across various modern architectures on CIFAR-10/100 and ImageNet, we show that our method achieves significant performance gains in both in-distribution and out-of-distribution scenarios. Impressively, evaluated with VGG-16 and ResNet-18, the produced sparse subnetworks outperform the original LTs by up to 1.88% on CIFAR-100 and 2.36% on CIFAR-100-C, and the resulting dense networks surpass the pre-trained dense models on CIFAR-100, and by up to 2.22% on CIFAR-100-C.
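A toy sketch of the interpolation idea follows, under the assumption that tickets are plain `state_dict`s from adjacent iterative-magnitude-pruning rounds and that sparsity is restored afterwards by a simple per-tensor magnitude re-pruning; the method's actual coefficient search and layer handling are not reproduced here.

```python
# Toy sketch: interpolate the weights of two IMP tickets, then re-prune to the target sparsity.
import torch


def interpolate_tickets(state_a, state_b, alpha=0.5):
    """Linear interpolation of two checkpoints (dicts of name -> tensor)."""
    return {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}


def magnitude_prune(state, sparsity=0.9):
    """Re-impose sparsity per tensor so the pooled ticket stays as sparse as the originals."""
    pruned = {}
    for name, w in state.items():
        if w.ndim < 2:                       # keep biases / norm parameters dense (an assumption)
            pruned[name] = w
            continue
        keep = max(1, int((1 - sparsity) * w.numel()))
        thresh = w.abs().flatten().topk(keep).values.min()
        pruned[name] = torch.where(w.abs() >= thresh, w, torch.zeros_like(w))
    return pruned


# e.g. ticket_r3 and ticket_r4 are state_dicts saved at adjacent IMP rounds:
# pooled = magnitude_prune(interpolate_tickets(ticket_r3, ticket_r4, alpha=0.5), sparsity=0.9)
```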
Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch. Existing sparse training methods usually strive to find the best sparse subnetwork in a single run, without involving any expensive dense or pre-training steps. For instance, as one of the most prominent directions, dynamic sparse training (DST) is able to reach competitive performance by iteratively evolving the sparse topology during training. In this paper, we argue that it is better to allocate the limited resources to create multiple low-loss sparse subnetworks and superpose them into a stronger one, instead of spending all resources on finding a single subnetwork. To achieve this, two desiderata are required: (1) efficiently producing many low-loss subnetworks, the so-called cheap tickets, within one training process limited to the standard training time used for dense training; and (2) effectively superposing these cheap tickets into one stronger subnetwork without exceeding the constrained parameter budget. To corroborate our conjecture, we propose a novel sparse training approach, termed Sup-tickets, which can satisfy both desiderata simultaneously in a single sparse-to-sparse training process. Across various modern architectures on CIFAR-10/100 and ImageNet, we show that Sup-tickets integrates seamlessly with existing sparse training methods and demonstrates consistent performance improvements.
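The "superpose many cheap tickets into one" step could look roughly like the following, assuming the cheap tickets are snapshots saved along a single DST run and that the parameter budget is restored by per-tensor magnitude pruning; this is an illustration of the idea, not the method's actual superposition rule.

```python
# Sketch: average sparse snapshots from one DST run, then re-prune back to the parameter budget.
import torch


def superpose_cheap_tickets(snapshots, sparsity=0.9):
    """snapshots: list of state_dicts saved during one DST run (the 'cheap tickets')."""
    avg = {k: sum(s[k] for s in snapshots) / len(snapshots) for k in snapshots[0]}
    out = {}
    for name, w in avg.items():
        if w.ndim < 2:                              # leave biases / norm parameters dense
            out[name] = w
            continue
        keep = max(1, int((1 - sparsity) * w.numel()))
        thr = w.abs().flatten().topk(keep).values.min()
        out[name] = torch.where(w.abs() >= thr, w, torch.zeros_like(w))  # back within budget
    return out
```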
Deep neural networks are vulnerable to adversarial examples crafted by imposing imperceptible changes on the inputs. However, such adversarial examples are most successful in white-box settings, where the model and its parameters are available. Finding adversarial examples that are transferable to other models, or that are developed in a black-box setting, is significantly more difficult. In this paper, we propose the Direction-Aggregated adversarial attack, which delivers transferable adversarial examples. Our method uses aggregated directions during the attack process to prevent the generated adversarial examples from overfitting the white-box model. Extensive experiments on ImageNet show that our proposed method significantly improves the transferability of adversarial examples and outperforms state-of-the-art attacks, especially against adversarially robust models. The best average attack success rate of our proposed method reaches 94.6% against three adversarially trained models and 94.8% against five defense methods. The results also show that current defense approaches do not prevent transferable adversarial attacks.
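The abstract does not give the exact update rule, so the sketch below shows one plausible reading of "aggregated directions": at every iteration the gradient is computed at several randomly perturbed copies of the current example, and the sign of their sum drives the step. The step size, iteration count, sample count, and noise scale are illustrative values, not the paper's settings.

```python
# Hedged sketch of an iterative L_inf attack that aggregates gradient directions.
import torch
import torch.nn.functional as F


def direction_aggregated_attack(model, x, y, eps=8 / 255, alpha=2 / 255,
                                steps=10, n_samples=4, sigma=0.05):
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad_sum = torch.zeros_like(x_adv)
        for _ in range(n_samples):                      # sample points around the current example
            xs = (x_adv + sigma * torch.randn_like(x_adv)).detach().requires_grad_(True)
            loss = F.cross_entropy(model(xs), y)
            grad_sum += torch.autograd.grad(loss, xs)[0]
        x_adv = (x_adv + alpha * grad_sum.sign()).detach()              # step along aggregated direction
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)  # project to the eps-ball
    return x_adv
```

Averaging over nearby points smooths the loss landscape seen by the attack, which is what discourages overfitting to the white-box model.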
In recent years, the problem of anomaly detection on attributed networks has attracted growing interest due to its importance in both research and practice. Although various approaches have been proposed to address this problem, two major limitations exist: (1) unsupervised approaches are usually much less effective due to the lack of supervision signals, and (2) existing anomaly detection methods only use local contextual information, e.g., one-hop or two-hop neighborhoods, to detect anomalous nodes, and ignore global contextual information. Since anomalous nodes differ from normal nodes in both structure and attributes, the distance between an anomalous node and its neighbors should intuitively be larger than the distance between a normal node and its neighbors if we remove the edges connecting anomalous and normal nodes. Thus, hop counts based on both global and local contextual information can serve as indicators of anomaly. Motivated by this intuition, we propose a hop-count based model (HCM) that detects anomalies by modeling both local and global contextual information. To better exploit hop counts for anomaly identification, we propose to use hop-count prediction as a self-supervised task. We design two anomaly scores based on the hop counts predicted by the HCM model to identify anomalies. Furthermore, we employ Bayesian learning to train the HCM model, capturing the uncertainty of the learned parameters and avoiding overfitting. Extensive experiments on real-world attributed networks demonstrate that our proposed model is effective for anomaly detection.
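The scoring intuition can be sketched as follows: a hop-count predictor estimates how far a node is from its direct neighbors, and since the true hop count to a neighbor is 1, a large predicted value flags the node as anomalous. The single score and the shortest-path stand-in predictor below are simplifications; the paper's two scores and Bayesian training are not reproduced.

```python
# Sketch of the hop-count intuition for anomaly scoring on a graph.
import numpy as np
import networkx as nx


def hop_count_anomaly_scores(graph, predict_hops):
    """predict_hops(u, v) -> estimated hop count between nodes u and v."""
    scores = {}
    for node in graph.nodes:
        neighbors = list(graph.neighbors(node))
        if not neighbors:
            scores[node] = 0.0
            continue
        # Direct neighbors are truly 1 hop away; a high predicted distance means the node
        # "looks far" from its own neighborhood in structure/attributes.
        preds = [predict_hops(node, nb) for nb in neighbors]
        scores[node] = float(np.mean(preds)) - 1.0
    return scores


# Example with a stand-in predictor based on true shortest paths (so all scores are 0 here);
# in the paper's setting the predictor is learned from node attributes and embeddings instead.
G = nx.karate_club_graph()
sp = dict(nx.all_pairs_shortest_path_length(G))
scores = hop_count_anomaly_scores(G, lambda u, v: sp[u][v])
```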
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
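The simpler of the two attacks can be pictured with the sketch below: a fixed trigger patch is stamped onto a small fraction of the raw images, which are relabeled to the target class before distillation runs, so the backdoor is carried into the synthetic set. The patch size, poison ratio, and the external `distill(...)` call are placeholders; DOORPING's iterative trigger optimization inside the distillation loop is not shown.

```python
# Sketch of a NAIVEATTACK-style poisoning step performed before dataset distillation.
import numpy as np


def poison_before_distillation(images, labels, target_class=0,
                               poison_ratio=0.01, patch=3):
    """images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_ratio * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0      # white square trigger in the corner
    labels[idx] = target_class                  # relabel to the attacker's target class
    return images, labels


# The distillation routine itself is external to this sketch:
# synthetic_x, synthetic_y = distill(*poison_before_distillation(train_x, train_y))
```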
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that help the regression model achieve better performance. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that the performance of PMT-IQA is superior to that of the comparison approaches, and that both the MS and PMT modules improve the model's performance.
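As an illustration of the two modules (not the authors' code), the sketch below pools features from several backbone stages and feeds them to a coarse quality-level classifier and a score regressor, with a loss weight that shifts from the easier classification task to the harder regression task over training; the dimensions, head design, and linear schedule are assumptions.

```python
# Sketch of multi-scale pooled features with a progressive two-task loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoHeadIQA(nn.Module):
    def __init__(self, feat_dims=(64, 128, 256), n_levels=5):
        super().__init__()
        self.proj = nn.Linear(sum(feat_dims), 256)
        self.cls_head = nn.Linear(256, n_levels)    # coarse quality levels (easy task)
        self.reg_head = nn.Linear(256, 1)           # continuous quality score (hard task)

    def forward(self, multi_scale_feats):           # list of (B, C_i, H_i, W_i) feature maps
        pooled = [f.mean(dim=(2, 3)) for f in multi_scale_feats]   # global average pooling
        h = F.relu(self.proj(torch.cat(pooled, dim=1)))
        return self.cls_head(h), self.reg_head(h).squeeze(-1)


def progressive_loss(cls_logits, reg_pred, level_target, score_target, epoch, total_epochs):
    w = epoch / max(1, total_epochs - 1)             # weight moves from 0 to 1 over training
    return (1 - w) * F.cross_entropy(cls_logits, level_target) + \
           w * F.mse_loss(reg_pred, score_target)
```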
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, which hinders graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built from the largest collection of original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
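For readers unfamiliar with multi-relational account detection, the minimal sketch below shows what "introducing multiple relations" typically means in practice: a relational GCN that keeps a separate weight matrix per relation type. It is not the benchmark's loading or evaluation code; only the seven relation types come from the description above, and the feature tensors are placeholders.

```python
# Sketch of a two-layer relational GCN classifier over a multi-relational account graph.
import torch
from torch_geometric.nn import RGCNConv


class RelationalAccountDetector(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, num_classes=2, num_relations=7):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hidden, num_relations=num_relations)
        self.conv2 = RGCNConv(hidden, num_classes, num_relations=num_relations)

    def forward(self, x, edge_index, edge_type):
        h = torch.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(h, edge_index, edge_type)   # per-user logits (e.g., bot vs. human)
```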